
add multi-node training script for distributed training setup #511

Open

kunwar-vikrant wants to merge 1 commit into karpathy:master from kunwar-vikrant:multinode_training

Conversation

kunwar-vikrant commented Feb 6, 2026

This pull request introduces a new script, runs/multinode.sh, that provides a complete workflow for multi-node distributed training, evaluation, and SFT (Supervised Fine-Tuning). The script automates environment setup, data preparation, distributed training, evaluation, and reporting, making it easier to run scalable experiments across multiple servers.

The most important changes are:

Distributed Training & Evaluation Automation

  • Added runs/multinode.sh, a full-featured bash script that automates multi-node distributed training, evaluation, and SFT using torchrun, configured via environment variables for node rank, master address, number of nodes, and GPUs per node.
  • Integrated NCCL configuration and network interface selection for reliable multi-node communication, including optional debug logging for diagnosing issues (see the launch sketch after this list).
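
A minimal sketch of what such a launch looks like (illustrative only; the defaults, the network interface name, and the training entry point below are assumptions, not lifted from runs/multinode.sh):

```bash
# Hypothetical multi-node launch: run the same command on every server,
# varying only NODE_RANK. All names and defaults here are illustrative.
NNODES=${NNODES:-3}                  # total number of servers
NODE_RANK=${NODE_RANK:-0}            # 0 on the master node, 1..NNODES-1 elsewhere
MASTER_ADDR=${MASTER_ADDR:-10.0.0.1} # reachable address of the rank-0 node
MASTER_PORT=${MASTER_PORT:-29500}
GPUS_PER_NODE=${GPUS_PER_NODE:-8}

# Pin NCCL to the right network interface; the interface name is
# machine-specific. Uncomment NCCL_DEBUG only when diagnosing issues.
export NCCL_SOCKET_IFNAME=${NCCL_SOCKET_IFNAME:-eth0}
# export NCCL_DEBUG=INFO

torchrun \
  --nnodes "$NNODES" \
  --node_rank "$NODE_RANK" \
  --master_addr "$MASTER_ADDR" \
  --master_port "$MASTER_PORT" \
  --nproc_per_node "$GPUS_PER_NODE" \
  -m scripts.base_train              # placeholder training entry point
```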

Environment & Data Preparation

  • Automated Python environment setup with uv and a local virtual environment, ensuring all dependencies are installed with GPU extras.
  • Implemented parallel, background dataset downloading to speed up data preparation, with logic to monitor the downloads and wait for completion before training starts (sketched after this list).
  • Included tokenizer training and evaluation steps to ensure consistency across nodes.
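
The environment and data steps roughly follow this shape (a sketch; the uv extras flag and the downloader module are assumptions, not verified against the script):

```bash
# Environment setup with uv: local virtual environment plus GPU extras.
uv venv .venv
source .venv/bin/activate
uv sync --extra gpu                  # hypothetical extras name for GPU deps

# Start dataset downloads in the background so shards arrive while setup
# continues, record the PIDs, then block until every download finishes.
pids=()
for shard in 0 1 2 3; do             # shard count is illustrative
  python -m data.download --shard "$shard" &   # hypothetical downloader module
  pids+=($!)
done
for pid in "${pids[@]}"; do
  wait "$pid"                        # block until this download completes
done
```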

Workflow Enhancements & Safety

  • Added signal handling for graceful shutdown and cleanup of background processes and child jobs when the script is interrupted (see the sketch below).
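
In bash this typically takes the form of a trap, along these lines (an illustrative sketch, not the exact code from the PR):

```bash
# On Ctrl-C or SIGTERM, terminate every background job this script started.
cleanup() {
  echo "Caught signal; terminating background jobs..."
  kill $(jobs -p) 2>/dev/null        # 'jobs -p' lists this shell's background PIDs
  wait                               # reap the children before exiting
  exit 1
}
trap cleanup SIGINT SIGTERM
```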

svlandeg added the feature (New feature or request) and scripts (Edits in the bash scripts) labels on Feb 9, 2026
@suspicious-pineapple

What kind of speedup did you see in your testing?

kunwar-vikrant (Author) commented Feb 9, 2026

Well, I was testing on three servers, each with 8xH100 GPUs; training time was around an hour, compared to ~3 hours when training on a single server with 8xH100 GPUs.

